Automated design of error-resilient and hardware-efficient deep neural networks
Authors
Abstract
Similar resources
Quality Resilient Deep Neural Networks
We study deep neural networks for classification of images with quality distortions. We first show that networks fine-tuned on distorted data greatly outperform the original networks when tested on distorted data. However, fine-tuned networks perform poorly on quality distortions that they have not been trained for. We propose a mixture-of-experts ensemble method that is robust to different type...
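The snippet stops before detailing the ensemble; the sketch below illustrates the general mixture-of-experts idea, where a gating function weights the outputs of per-distortion experts. The linear experts, the gating matrix, and all dimensions are illustrative assumptions, not the paper's architecture.

```python
# A minimal mixture-of-experts sketch (illustrative only; the paper's
# actual gating and expert networks are assumptions here).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

n_experts, n_features, n_classes = 3, 64, 10

# Each "expert" stands in for a network fine-tuned on one distortion type
# (e.g. blur, noise, compression); here they are random linear classifiers.
experts = [rng.normal(size=(n_features, n_classes)) for _ in range(n_experts)]

# The gating function scores how well each expert matches the input, and
# the ensemble averages expert outputs by those scores.
gate = rng.normal(size=(n_features, n_experts))

def predict(x):
    weights = softmax(x @ gate)                                   # (batch, E)
    probs = np.stack([softmax(x @ W) for W in experts], axis=1)   # (batch, E, C)
    return (weights[..., None] * probs).sum(axis=1)               # (batch, C)

x = rng.normal(size=(5, n_features))   # a dummy batch
print(predict(x).shape)                # (5, 10); each row sums to 1
```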
Multi-Level Error-Resilient Neural Networks with Learning
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity, and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes...
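The snippet describes the association task without fixing an architecture; a classical Hopfield network, sketched below, shows the basic retrieve-from-noise behavior. The Hebbian learning rule and network size are textbook choices, not the paper's multi-level construction.

```python
# A minimal Hopfield-style associative memory: store bipolar patterns and
# recover one from a corrupted copy (illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 100, 5

# Memorize +1/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)   # no self-connections

def retrieve(x, steps=20):
    # Synchronous sign updates until the state stops changing.
    for _ in range(steps):
        nxt = np.sign(W @ x)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x

# Flip 10% of one stored pattern's bits, then recover it.
noisy = patterns[0].copy()
flip = rng.choice(n, size=n // 10, replace=False)
noisy[flip] *= -1
print(np.mean(retrieve(noisy) == patterns[0]))   # typically 1.0
```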
Efficient Inferencing of Compressed Deep Neural Networks
The large number of weights in deep neural networks makes the models difficult to deploy in low-memory environments such as mobile phones, IoT edge devices, and "inferencing as a service" environments in the cloud. Prior work has considered reducing model size through compression techniques like pruning, quantization, and Huffman encoding. However, efficient inferencing usi...
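As a rough illustration of the compression techniques the abstract lists, the sketch below applies magnitude pruning and uniform 8-bit quantization to a weight matrix (Huffman coding omitted); the 90% sparsity target and the bit-width are arbitrary assumptions, not the paper's settings.

```python
# Magnitude pruning followed by uniform int8 quantization (sketch).
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(512, 512)).astype(np.float32)

# Prune: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Quantize the surviving weights to int8 with a single scale factor.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)

# Dequantize for inference; the error is bounded by half a quantization step.
W_deq = W_q.astype(np.float32) * scale
print("sparsity:", np.mean(W_q == 0))
print("max abs error:", np.abs(W_deq - W_pruned).max())
```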
Exploiting Hardware Transactional Memory for Error-Resilient and Energy-Efficient Execution
As semiconductor circuit sizes continue to shrink, execution errors are becoming an increasingly concerning issue. To avoid such errors, designers often turn to “guardband” restrictions on the operating frequency and voltage. If guardbands are too conservative, they limit performance and waste energy, but less conservative guardbands risk moving the system closer to its Critical Operating Point...
Efficient Model Averaging for Deep Neural Networks
Large neural networks trained on small datasets are increasingly prone to overfitting. Traditional machine learning methods can reduce overfitting by employing bagging or boosting to train several diverse models. For large neural networks, however, this is prohibitively expensive. To address this issue, we propose a method to leverage the benefits of ensembles without explicitly training sever...
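The snippet is cut off before naming the method; one well-known way to approximate an ensemble with a single trained network is Monte Carlo dropout at test time, sketched below with a toy two-layer model. This is an assumption for illustration, not necessarily the paper's proposal.

```python
# Test-time (Monte Carlo) dropout as an implicit ensemble: each random
# dropout mask acts like a distinct thinned model, and predictions are
# averaged over many stochastic forward passes.
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy two-layer network with fixed (pretend-trained) weights.
W1 = rng.normal(size=(20, 64))
W2 = rng.normal(size=(64, 5))

def forward(x, drop=0.0):
    h = np.maximum(x @ W1, 0)                 # ReLU hidden layer
    if drop > 0:
        mask = rng.random(h.shape) >= drop    # random dropout mask
        h = h * mask / (1 - drop)             # inverted-dropout scaling
    return softmax(h @ W2)

x = rng.normal(size=(4, 20))
probs = np.mean([forward(x, drop=0.5) for _ in range(100)], axis=0)
print(probs.shape)   # (4, 5) averaged class probabilities
```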
Journal
Journal title: Neural Computing and Applications
Year: 2020
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-020-04969-6